Engineering Mind Blindness, Part 2

 

  

 

The Invisible Foundation

Why Your Brain Ignores the Most Important Part of Every Simulation

Part Two: Training the Checking Habit

 

By Joseph McFadden Sr.

Engineering Fellow, Zebra Technologies  |  Professor of Mechanical Engineering, Fairfield University

McFaddenCAE.com

 

 

Part of the Building Intuition Before Equations series

Continues from Part One: The Problem


 

 

Where We Left Off

 

In Part One, we established the mechanism. The human brain is a prediction machine — optimized over hundreds of thousands of years to suppress everything it already expects to see. When a prediction matches reality, the brain does almost nothing. When there is a mismatch, it mobilizes.

 

The blind spot always lands in the same place: the foundational, the familiar, the small. The density value in the material card. The unit system inherited from the last program. The vendor model accepted without verification because it came in the right format and the numbers looked plausible.

 

We also introduced a new variable. Artificial intelligence — the most powerful fluency generator ever to enter our workflow — amplifies that blind spot in specific and technically precise ways. An AI trained on SI data gives you a density of 7,850 kg/m³ when your tonne-millimeter-second model needs 7.85E-9 tonne/mm³. The output is polished. The format is professional. The prediction machine classifies it as handled. The gorilla has walked through the room and nobody saw it.
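
The arithmetic behind that mismatch is worth seeing once. Here is a minimal sketch (plain Python, nothing solver-specific) converting the familiar SI value into the value a tonne-millimeter-second model actually needs:

```python
# Why the "same" steel density appears as 7850 in SI and 7.85E-9 in tonne-mm-s.
rho_si = 7850.0                      # kg/m^3, textbook steel density in SI

# 1 kg = 1e-3 tonne and 1 m^3 = (1000 mm)^3 = 1e9 mm^3,
# so dividing by 1e12 converts kg/m^3 to tonne/mm^3.
rho_tonne_mm = rho_si * 1.0e-3 / 1.0e9

print(f"{rho_si} kg/m^3 = {rho_tonne_mm:.2e} tonne/mm^3")
# 7850.0 kg/m^3 = 7.85e-09 tonne/mm^3
```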

 

Knowing the problem is only half the work. Part Two is what you do about it.

 

Chapter Ten

The Warning Signals

 

The prediction machine leaves fingerprints before it suppresses. There are specific internal states that reliably precede attentional blindness — and in simulation work, they show up in recognizable patterns. Learn to read them and you have an early-warning system that runs on your own cognition rather than external process alone.

 

The first warning signal is fluency.

 

When a model setup feels almost frictionless — the inputs look right, the material card looks familiar, the boundary conditions are the same ones you used last program — that smoothness is not always evidence that everything is correct. Sometimes it is the prediction machine running unchecked, classifying the entire setup as handled without actually verifying it.

 

Smooth is not always right. Smooth is the exact texture of the brain not checking. When a simulation feels too easy to set up, that is precisely the moment to slow down.

 

The second warning signal is high stakes combined with high familiarity.

 

This pairing is the profile of nearly every major simulation failure on record. Not a junior engineer encountering a unit system for the first time. An experienced analyst running the same class of model they have run hundreds of times before. The Orbiter navigation team ran trajectory corrections every week for nine months. The Hubble fabrication team used the most trusted null corrector in their inventory. High familiarity gave their prediction machines permission to stop looking. Whenever you are working on a model type you know well — especially under schedule pressure — treat that familiarity as a yellow flag, not a green one.

 

The third warning signal is emotional investment.

 

The brain can respond to contradictory evidence in two ways: update the model to fit the evidence, or discount the evidence to protect the existing model. Both reduce prediction error. Both feel like resolution. But only one corresponds to reality. When you realize you want the simulation to validate the design — when the program is riding on a passing result, when a deadline is in three days — that is precisely the moment to look hardest at the density value. The Hubble engineers wanted the mirror to be perfect. That investment made the contradictory signal invisible to them. Name the investment when you feel it. Then check the density.

 

The fourth warning signal is the transition point.

 

Unit errors cluster at handoffs. When you inherit a model from a vendor. When you receive a material card from a colleague who built their model in a different unit system. When you migrate from implicit to explicit analysis. When the geometry comes from one organization and the material library comes from another. At transitions, the prediction machine has the least continuity. What was assumed coming in and what is being assumed going out rarely get stated explicitly. The Orbiter was built by Lockheed Martin and navigated by JPL. The Vasa was built by two teams using different feet. Every handoff is a yellow flag.

 

The Four Warning Signals

1. Fluency — the model feels too easy to set up.
2. High stakes + high familiarity — experienced analyst, familiar model type.
3. Emotional investment — you want a specific result.
4. Transition points — any handoff between teams, programs, or unit systems.

 

Chapter Eleven

Three Practices That Work

 

With the warning signals identified, let's build the habits. Three specific practices will train the checking habit. Each is grounded in peer-reviewed neuroscience. The first two build the internal model. The third requires understanding a risk before it becomes a remedy.

 

The first practice: daily reflective writing.

 

Not a project log. Not a simulation report. A genuine diagnostic on your own thinking. At the end of a working day, pick one model you touched, one assumption you made without questioning it, one task that felt smooth. Write about how you approached it, what you assumed about the unit system, what you verified and what you classified as handled. Was the smoothness earned — did you actually check the density? Or was the smoothness the prediction machine running unchecked? Five to ten minutes.

 

Researcher Dorit Alt at Kinneret College ran a thirteen-week study with ninety-seven students using exactly this approach. Students who interrogated their own reasoning — not just described what they did, but asked why they did it that way and what they might have missed — showed measurable improvement in catching their own thinking running on autopilot. Students who simply summarized their work showed almost no benefit. The journal is not a diary. It is a diagnostic.

 

Neuroscientist Matthew Lieberman at UCLA added the deeper layer: putting a cognitive experience into precise language activates the prefrontal cortex — the deliberate checking brain — while simultaneously damping the fast reactive system. The act of writing about how you are thinking literally shifts neural control from automatic to deliberate. Over time, this rewires the prediction machine itself. Once unit verification becomes part of what correct simulation setup feels like, the check stops being an effortful interruption. It becomes part of the foveal view.

 

The second practice: narrate critical checks out loud.

 

Before you submit the simulation run — especially a model you did not build, especially one inherited from a vendor or a previous program — say the critical checks out loud.

 

I know how that sounds. And notice what your prediction machine just did — it classified that idea as familiar, filed it under "that is what odd people do," and moved on before the evidence had a chance to land. Which is, with some irony, exactly the behavior we have been discussing.

 

Psychologists Alexander Kirkham and Paloma Mari-Beffa at Bangor University found that reading instructions aloud produces higher concentration and better performance than reading silently — because auditory commands are simply better controllers of behavior than silent ones. Researcher Kyle Cox found that people who narrate tasks out loud complete them twenty-five percent faster and with fewer errors.

 

Silent inner speech is compressed and fragmented. We think in shorthand — abbreviated conclusions, assumed steps, skipped logic. When you speak out loud, you are forced to complete the sentence. Assumptions you did not know you were making have to come out of your mouth. And sometimes, when they do, they sound exactly as shaky as they are.

 

In practice, before you run the analysis: say the density value out loud, say what unit system it corresponds to, say whether it matches your geometry units, say what the total mass should be for the physical part and what the model is reporting. If you cannot say those things with confidence, you have found the gap before the simulation found it for you.

 

It does not need an audience. It needs your ears. And we all walk around with wireless earbuds now. Nobody knows who you are talking to.
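
The mass comparison in that spoken checklist is also short enough to keep in a scratch script. Below is a minimal sketch with hypothetical numbers; in practice the reported mass would come from your pre-processor's mass-properties output, and the expected mass from hand arithmetic on the physical part.

```python
# Minimal sketch of the spoken mass check, using hypothetical values.
density = 7.85e-9            # tonne/mm^3, steel in a tonne-mm-s model
volume = 1.2e6               # mm^3, hypothetical part volume taken from CAD

expected_mass = density * volume     # tonne; here 9.42e-3 t, i.e. about 9.4 kg
reported_mass = 9.42e3               # tonne, hypothetical value read from the model

ratio = reported_mass / expected_mass
print(f"expected {expected_mass:.2e} t, model reports {reported_mass:.2e} t, ratio {ratio:.0e}")

# A ratio near 1e6 is the classic fingerprint of an SI density (7850)
# pasted into a tonne-mm-s model. Anything far from 1.0 means stop and check.
if not 0.95 < ratio < 1.05:
    print("Mass check FAILED: say the density and unit system out loud before submitting.")
```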

 

The third practice: use artificial intelligence as a Socratic partner, not an oracle.

 

In Chapter Nine, we named the risk: AI produces polished, confident, structurally complete output — exactly the kind of fluency the prediction machine uses to classify something as handled and move on.

 

Now here is the turn. The same tool, used differently, is one of the most powerful instruments for narrowing the blind spot that any of us have ever had access to.

 

When I use artificial intelligence as an answer machine — asking it to generate the material card, accepting the fluent result, moving on — I am handing the prediction machine exactly what it wants. A smooth, complete, handled signal. The blind spot widens. When I use it as a Socratic partner — asking it to challenge the inputs, to find what I am assuming, to argue that something might be wrong — I am generating the deliberate friction the prediction machine works hardest to suppress. The blind spot narrows.

 

Socratic Prompts for Material Verification

'What should steel density be in tonne-mm-s?'
'Does this density match my unit system?'
'Argue that this material card has a unit error.'
'What would I expect to see in results if density were wrong by a factor of a million?'
'What inconsistencies might exist between a gram-mm-ms card and tonne-mm-s?'

 

When you bring those questions to a capable artificial intelligence system rather than asking it for the answer, you have turned the fluency generator into a checking mechanism. The byline on this essay says it plainly: formatted and expanded with artificial intelligence — not to be told what to write, but to debate and build upon the work. That sentence is not a disclaimer. It is a description of the practice. The thinking has to happen with the tool, not be replaced by it.

 

Chapter Twelve

Building Awareness, Not Punishment

 

So what do we do with all of this? You do not fight three hundred thousand years of optimization. You design around it.

 

In the classroom, when I see a unit error, I do not just mark it wrong. I stop and ask: "What does that number physically mean? What would it feel like? How heavy is that? How fast is that?" Because if the student can answer those questions — if they can feel the physics — the units come along for free. You do not need to remind someone who truly understands stress to write megapascals. The unit is part of their mental image. This is what I mean by building intuition before equations.

 

The individual plan is four steps.

 

Step one: know your own warning signals specifically, not abstractly. Where in your workflow does fluency tend to hide errors? Inherited vendor models? Explicit dynamics jobs where the millisecond system tempts the wrong density? Models carried forward from a previous program? Write those down. Your blind spot profile is different from your colleague's. Know yours.

 

Step two: design deliberate interrupt points at exactly those locations. A written check before you accept any external material card. A mass verification step at the end of every pre-processing session. A short set of questions visible before you submit: what is the density, what unit system does it correspond to, and does it match my geometry? The interrupt does not need to be long. It needs to be deliberate. A sketch of what one such interrupt can look like follows the plan summary below.

 

Step three: build the daily reflective practice. Not a summary of outputs — a diagnostic on reasoning. Pick one model you touched today, one assumption that felt smooth. Write about whether the smoothness was earned. Five to ten minutes. Over time, this builds the metacognitive sensitivity that makes the warning signals fire earlier and more precisely — before the model runs, not after.

 

Step four: use the Socratic prompts for any material card whose origin you did not personally verify. Vendor model, colleague's model, AI-generated properties — all of them get the same question: does this density match the unit system I am working in? If you cannot answer from memory, ask the AI to challenge it. Then verify.

 

The Individual Plan

1. Know your specific warning signal profile and write it down.
2. Design a deliberate interrupt at each one.
3. Build a daily 5-10 min diagnostic writing practice.
4. Apply Socratic AI prompts to all unverified material cards.
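
To make step two concrete, here is a minimal sketch of one possible pre-submit interrupt for an Abaqus-style input deck. It scans the file for *Density data lines and flags any value whose magnitude does not belong in a tonne-mm-s model. The file name and the plausible-density range are illustrative assumptions, not part of a finished tool.

```python
# Minimal sketch of a pre-submit interrupt for an Abaqus-style .inp deck.
# It reports every *Density value and flags magnitudes that do not belong
# in a tonne-mm-s model. File name and thresholds are illustrative assumptions.

def check_densities(inp_path, lo=1.0e-10, hi=2.0e-8):
    """Warn when a density falls outside a plausible tonne/mm^3 range
    (most engineering solids sit between roughly 1e-9 and 2e-8)."""
    with open(inp_path) as f:
        lines = [line.strip() for line in f]
    for i, line in enumerate(lines):
        if line.lower().startswith("*density"):
            for j in range(i + 1, len(lines)):
                if not lines[j] or lines[j].startswith("**"):   # skip blanks and comments
                    continue
                value = float(lines[j].split(",")[0])           # first field of the data line
                flag = "" if lo <= value <= hi else "  <-- CHECK UNIT SYSTEM"
                print(f"line {j + 1}: density = {value:.3e}{flag}")
                break

check_densities("bracket_drop_test.inp")   # hypothetical model name
```

Adjust the thresholds to whatever unit system your team actually works in. The point is not the script; it is the deliberate pause the script forces before you submit.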

 

At the team level, the mechanism is cultural. The prediction machine is social — it suppresses the foundational question when asking it carries professional risk, and it surfaces the question when asking it is understood as a mark of rigor. In design reviews, in simulation check-ins, in vendor model handoffs, designate someone to ask the question that feels almost too obvious. What unit system is this model in? What density are we using and what does that correspond to? Rotate that role. Make the foundational question normal.

 

The culture you build is the prediction machine you get.

 

Chapter Thirteen

The Energy Budget of Understanding

 

Let me close with something from the neuroscience that I find genuinely hopeful.

 

Sharna Jamadar at Monash University studied the metabolic cost of cognition — how much extra energy the brain uses when you switch from resting to focused, effortful thinking. The answer: about five percent more. Ninety-five percent of the brain's energy budget goes to baseline operations — keeping ninety billion neurons alive and the prediction machine running. The actual cognitive work of deliberately checking something is a marginal addition.

 

The barrier to catching the little things is not energy. It's interruption. The prediction machine has a default behavior — suppress the familiar, attend to the novel — and overriding that default takes conscious effort. Not metabolic energy. Willpower. Daniel Kahneman described this as the tension between System One — fast, automatic, energy-conserving — and System Two — slow, deliberate, effortful. System One says: that density looks right, move on. System Two says: wait — what unit system does that correspond to?

 

The cost of invoking System Two is five percent. We resist it as if it were monumental, because the evolutionary wiring does not know the stakes have changed.

 

The real goal is not to fight this every day forever. The real goal is to build intuition deep enough that 7.85E-9 registers not just as a small number but as 'that is steel in tonne-mm-s, so my stresses will be in megapascals and my forces will be in newtons.' When you hear that in your own head without effort, you have rewired the prediction machine.
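
The arithmetic behind that sentence fits in a few lines. A quick sketch, with the working units expressed in SI and nothing more:

```python
# Why a tonne-mm-s model reports forces in newtons and stresses in megapascals.
tonne, mm, s = 1.0e3, 1.0e-3, 1.0    # the working units expressed in SI (kg, m, s)

force_unit = tonne * mm / s**2       # mass * acceleration: 1 tonne*mm/s^2 = 1 kg*m/s^2 = 1 N
stress_unit = force_unit / mm**2     # force / area: 1 N/mm^2 = 1e6 N/m^2 = 1 MPa

print(f"{force_unit:g} N, {stress_unit:g} Pa")   # 1 N, 1e+06 Pa (one megapascal)
```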

 

That is the goal. Not discipline. Not checklists — although checklists help. An internal model so complete that the gorilla cannot pass through the room without being seen.

 

Chapter Fourteen

The Holistic View

 

We live in an era where simulation software is increasingly powerful and increasingly accessible. The tools have gotten better. But the tools have also made it easier to skip the foundations. The more automated the process becomes, the more the cognitive system treats it as handled.

 

The Mars Climate Orbiter's navigation software was sophisticated. The Hubble mirror was polished by the most advanced fabrication process on the planet. The Gimli Glider's 767 was a state-of-the-art aircraft. Sophistication does not prevent foundational errors. It masks them — by producing outputs that look professional and credible regardless of whether the inputs were right.

 

In 1628, the Swedish warship Vasa was the pride of the fleet — sixty-four guns, ornate carvings, the most ambitious warship ever built. It sank in Stockholm harbor on its maiden voyage, less than a mile from shore. The two teams building opposite sides of the hull used different measurement systems: Swedish feet versus Amsterdam feet. The asymmetry contributed to a ship that was fatally top-heavy. Four centuries later, we are still making the same kind of error.

 

Now add artificial intelligence to this picture. We have arrived at a moment where the tools can generate material cards, interpret boundary conditions, assist with mesh decisions, and summarize simulation results — all with extraordinary fluency. The prediction machine, encountering that polished confidence at every stage of the simulation pipeline, is given more permission to stop checking than it has ever had before.

 

This does not make artificial intelligence a liability. It makes the foundational awareness described across both parts of this essay more important than ever. The engineer who understands why the Gimli Glider crashed, who knows the density fingerprint in their bones, who has built the four warning signals into their daily awareness, who narrates the critical checks before they run the model, who uses artificial intelligence to generate friction rather than fluency — that engineer is equipped to work with these powerful tools at the level they deserve.

 

Units are not a separate topic from materials, which are not a separate topic from element types, which are not a separate topic from boundary conditions, which are not a separate topic from the tools we use to build and verify our models. They are all one system. And the unit system is the base pair that all of it rests on.

 

The next time you open a model — especially one you did not build, especially one where artificial intelligence assisted in any part of the setup — think about your prediction machine. Think about the original gorilla. Think about the new gorilla: the polished material card that looks exactly right, the density value that came from SI training data in a tonne-millimeter-second model. And then spend five seconds — five seconds that cost your brain almost nothing — looking at the density value with your fovea, not your periphery.

 

Those five seconds might be the most important part of your entire analysis. Or they might prevent the next quiz from losing five points. Or they might be the difference between a product that works and one that fails in the field. Or — if you are working at the right altitude — they might save three hundred twenty-seven million dollars.

 

The mechanism is the same. The awareness is the cure. And awareness isn't about being pedantic. It's about understanding deeply enough that the invisible becomes visible — including the new kind of invisible that comes dressed in the language of intelligence.

 

 

 

Joseph McFadden Sr. is an Engineering Fellow at Zebra Technologies leading the MEAS (Mechanical Engineering Analysis & Services) team, and a Professor of Mechanical Engineering at Fairfield University. He has over 44 years of experience in failure analysis, CAE simulation, materials science, and expert witness work, and was one of three pioneers who brought Moldflow simulation technology to North America. He writes and teaches under the "Holistic Analyst" and "Building Intuition Before Equations" brands, exploring the intersection of engineering simulation, neuroscience, and systems thinking.

 

All thoughts and ideas are the author's own, formatted and expanded with Claude AI — not to be told what to write, but to debate and build upon the work.

 

This essay is part of the FEA Best Practices series. For more content, tools, and the Abaqus INP Analyzer, visit McFaddenCAE.com.
